
    T-Life: a training program for emotional recognition ability in people with autism spectrum disorders

    Individuals with interpersonal relationship deficits, specifically those with Autism Spectrum Disorder (ASD), have difficulty recognising emotions in themselves and in others, showing clear impairment in social functioning and in everyday interpersonal tasks. Facial Expression Recognition (FER) has been studied in an attempt to understand deficits in emotional recognition. Numerous works attempt to promote facial emotion recognition in individuals with ASD, but not through real-time face synthesis. New technologies, notably Virtual Reality, appear very promising for work with ASD (Eynon, 1997, cit. in Beardon et al., 2001; Strickland et al., 1996), as they meet these individuals' needs and characteristics by allowing: control over the presentation of stimuli; gradual modifications to support generalisation; guaranteed safe learning situations; individualised, customised intervention; and interaction with computers, a motivating factor that enables learning in a non-anxiogenic way (Strickland, 1997). The approach we are developing and testing reflects all of these assumptions and takes the form of a game. What a feeling is a video game whose aim is to improve the ability of socially and emotionally impaired individuals to recognise emotions through facial expression. Through a set of exercises, the game allows anyone of any age to interact with 3D models and learn about facial expressions. The game is based on real-time facial synthesis. In this communication we describe the mechanics of our learning methodology and present some guidelines for future work arising from the studies carried out with the tested prototypes.

    Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation.

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in the representation of face shape, with a spectrum from tall/thin to short/wide representations. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
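The error analysis described above can be sketched numerically. The feature positions and anchor convention below are purely illustrative (the study's actual coordinates are not given here): each feature is an (x, y) offset in millimetres from the nose tip, and horizontal and vertical errors are separated before averaging.

```python
import numpy as np

# Hypothetical example: indicated vs. true feature positions (x, y) in mm,
# relative to an anchor at the nose tip, as in the task described above.
# Rows stand in for, e.g., left eye, right eye, mouth centre.
indicated = np.array([[-30.0, -42.0], [30.0, -44.0], [0.0, 35.0]])
true_pos  = np.array([[-32.0, -55.0], [33.0, -57.0], [0.0, 42.0]])

errors = indicated - true_pos
horizontal_error = errors[:, 0]   # analysed separately, as in the study
vertical_error = errors[:, 1]

# Features above the anchor (negative y) placed less far up, and features
# below (positive y) placed less far down, both shrink the represented
# face height -- the vertical-underestimation bias reported above.
mean_vertical_bias = vertical_error.mean()
```

With these toy values the eyes are placed 13 mm too low-in-magnitude and the mouth 7 mm too high-in-magnitude, i.e. all vertical distances from the anchor are underestimated.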

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, or sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars, covering the current main alternatives for face and body animation as well as introducing upcoming capture methods. The second part presents the scientific evidence for the utility of rigged avatars, both for embodiment and for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.

    Facial expression animation through action units transfer in latent space

    Automatic animation synthesis has attracted much attention from the community. As most existing methods handle a small number of discrete expressions rather than continuous expressions, the integrity and realism of the resulting facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important for automatic facial expression animation applications, have received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach through action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, depicted as an AU vector, is transferred into the input image without the need for labelled pairs of images, or even their expression labels, and without further network training. We also propose a new approach to quickly generate the input image's latent code and to cluster the boundaries of different AU attributes with their latent codes. Two latent-code operators, vector addition and continuous interpolation, are leveraged for facial expression animation, aligned with the boundaries in the latent space. Experiments have shown that the proposed approach is effective for facial expression translation and animation synthesis.
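The two latent-code operators named above can be sketched with toy vectors. This is a minimal illustration, not the paper's implementation: the random vectors stand in for GAN latent codes, and `au_direction` stands in for a direction derived from the learned AU attribute boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)

z_source = rng.standard_normal(512)      # latent code of the input image (toy)
au_direction = rng.standard_normal(512)  # hypothetical AU attribute direction

# Operator 1: vector addition -- push the code along the AU direction
# to strengthen the corresponding action unit in the decoded image.
z_edited = z_source + 1.5 * au_direction

# Operator 2: continuous interpolation -- blend two codes to produce
# a smooth animation from the neutral to the edited expression.
def lerp(z_a, z_b, t):
    """Linear interpolation between two latent codes, t in [0, 1]."""
    return (1.0 - t) * z_a + t * z_b

frames = [lerp(z_source, z_edited, t) for t in np.linspace(0.0, 1.0, 10)]
```

Decoding each interpolated code through the generator would then yield one animation frame per step, with the expression intensity varying continuously.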

    Unravelling the population structure and transmission patterns of Mycobacterium tuberculosis in Mozambique, a high TB/HIV burden country

    Background: Genomic studies of the Mycobacterium tuberculosis complex (MTBC) might shed light on the dynamics of its transmission, especially in high-burden settings, where recent outbreaks are embedded in the complex natural history of the disease. We applied whole-genome sequencing (WGS) to characterise the local population of MTBC, unravel potential transmission links and evaluate associations with host and pathogen factors. Methods: A one-year prospective study was conducted in Mozambique, a high HIV/TB burden country. WGS was applied to 295 positive cultures. We combined phylogenetic, geographical and clustering analyses, and investigated associations between risk factors of transmission. Findings: A significantly high proportion of strains (45.5%) reflected recent transmission. We fully characterised the MTBC isolates using phylogenetic approaches and dating evaluation. We found two likely endemic clades, comprising 67 strains belonging to L1.2, dating from the late 19th century and associated with recent spread among people living with HIV (PLHIV). Interpretation: Our results unveil the population structure of MTBC in our setting. The clustering analysis revealed an unexpected pattern of spread and high rates of progression, suggesting the failure of control measures. The long-term presence of local strains in Mozambique, responsible for extensive transmission among HIV/TB co-infected patients, hints at possible coevolution with sympatric host populations and challenges the role of HIV in TB transmission. Funding: Ministry of Enterprise and Knowledge (Government of Catalonia & European Social Fund, AGAUR fellowship); European Research Council (ERC), European Union's Horizon 2020.
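The clustering analysis used to flag recent transmission in WGS studies is commonly based on pairwise SNP distances: isolates within a fixed SNP threshold of one another are grouped into transmission clusters. The distance matrix and the 12-SNP cut-off below are illustrative conventions, not figures taken from this study.

```python
import numpy as np

# Toy pairwise SNP distance matrix for four isolates (symmetric, zero diagonal).
snp_distance = np.array([
    [0, 3, 40, 41],
    [3, 0, 39, 42],
    [40, 39, 0, 2],
    [41, 42, 2, 0],
])
THRESHOLD = 12  # SNPs; a commonly used cut-off for recent transmission

# Union-find: merge any pair of isolates within the threshold, so that
# chains of close isolates end up in the same transmission cluster.
parent = list(range(len(snp_distance)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

n = len(snp_distance)
for i in range(n):
    for j in range(i + 1, n):
        if snp_distance[i, j] <= THRESHOLD:
            parent[find(i)] = find(j)

clusters = {}
for i in range(n):
    clusters.setdefault(find(i), []).append(i)

# Only groups with more than one isolate count as transmission clusters;
# the clustering proportion is then clustered isolates over all isolates.
clustered = [c for c in clusters.values() if len(c) > 1]
```

With these toy distances, isolates 0–1 and 2–3 each fall within the threshold, giving two clusters and a clustering proportion of 100%; in the study above the analogous proportion was 45.5%.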

    Apparent Biological Motion in First and Third Person Perspective

    No full text